28 research outputs found

    Controlling Chaos Faster

    Predictive Feedback Control is an easy-to-implement method for stabilizing unknown unstable periodic orbits in chaotic dynamical systems. It is severely limited, however, because its asymptotic convergence speed decreases with stronger instabilities, which in turn are typical of larger target periods, making periodic orbits of large period harder to stabilize effectively. Here, we study stalled chaos control, where the application of control is stalled to make use of the chaotic, uncontrolled dynamics, and introduce an adaptation paradigm to overcome this limitation and speed up convergence. This modified control scheme not only stabilizes more periodic orbits than the original Predictive Feedback Control but also speeds up convergence for typical chaotic maps, as illustrated in both theory and application. The proposed adaptation scheme provides a way to tune parameters online, yielding a broadly applicable, fast chaos control that converges reliably, even for periodic orbits of large period.
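The basic mechanism can be sketched in a few lines (a toy illustration assuming the simple convex-combination form of predictive feedback applied to a fixed point of the logistic map; the stalled and adaptive variants described above build on this basic scheme):

```python
# Toy illustration of predictive feedback control (PFC): stabilizing the
# unstable fixed point x* = 0.75 of the fully chaotic logistic map
# f(x) = 4 x (1 - x) with the simple convex-combination form
# x_{n+1} = (1 - eta) f(x_n) + eta x_n.

def f(x):
    return 4.0 * x * (1.0 - x)

def pfc_step(x, eta):
    return (1.0 - eta) * f(x) + eta * x

# At x* = 0.75 we have f'(x*) = -2, so the controlled multiplier is
# (1 - eta) * (-2) + eta, which lies inside (-1, 1) for eta in (1/3, 1).
x, eta = 0.3, 0.6
for _ in range(100):
    x = pfc_step(x, eta)

print(x)   # converges to the fixed point 0.75
```

Without the control term (eta = 0) the same orbit wanders chaotically; the feedback only vanishes on the target orbit, which is why the stabilized orbit is still an orbit of the original system.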

    Adapting Predictive Feedback Chaos Control for Optimal Convergence Speed

    Stabilizing unstable periodic orbits in a chaotic invariant set not only reveals information about its structure but also leads to various interesting applications. For the successful application of a chaos control scheme, convergence speed is of crucial importance. Here we present a predictive feedback chaos control method that adapts a control parameter online to yield optimal asymptotic convergence speed. We study the adaptive control map both analytically and numerically and prove that it converges at least linearly to a value determined by the spectral radius of the control map at the periodic orbit to be stabilized. The method is easy to implement algorithmically and may find applications in the adaptive online control of biological and engineering systems. Comment: 21 pages, 6 figures
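As a rough illustration of online gain adaptation (a heuristic Newton-style update under stated assumptions, not the proof-backed adaptive control map of the paper), one can estimate the local multiplier of the controlled map from successive differences and re-solve for the gain that zeroes it:

```python
# Heuristic sketch of adapting a predictive-feedback gain online.
# Near the target orbit the controlled map is approximately linear, so
# successive differences obey x_{n+1} - x_n ~ m (x_n - x_{n-1}) with
# multiplier m = (1 - eta) * lam + eta, where lam is the map's derivative
# at the fixed point.  Estimating m lets us solve for the gain making m = 0.

def f(x):
    return 4.0 * x * (1.0 - x)      # chaotic logistic map, lam = -2 at x* = 0.75

def step(x, eta):
    return (1.0 - eta) * f(x) + eta * x

eta = 0.9                           # stable but slow: m = -2 + 3 * eta = 0.7
x_prev, x = 0.3, step(0.3, eta)
for _ in range(200):
    x_next = step(x, eta)
    d0, d1 = x - x_prev, x_next - x
    if abs(d0) > 1e-12:
        m = d1 / d0                 # multiplier estimate from differences
        lam = (m - eta) / (1.0 - eta)
        if lam < -1.0:              # adapt only while the estimate is sane
            eta = min(0.95, max(0.05, lam / (lam - 1.0)))  # zeroes m
    x_prev, x = x, x_next

print(eta, x)   # gain near 2/3 (the superstable value), state near 0.75
```

The point of adapting online is visible here: the initial gain converges with rate 0.7 per step, while the adapted gain drives the local multiplier toward zero, the fastest possible asymptotic rate.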

    Closed-Form Treatment of the Interactions between Neuronal Activity and Timing-Dependent Plasticity in Networks of Linear Neurons

    Network activity and network connectivity mutually influence each other. Especially for fast processes such as spike-timing-dependent plasticity (STDP), which depends on the interaction of few (two) signals, the question arises how these interactions continuously alter the behavior and structure of the network. Addressing this question requires a time-continuous treatment of plasticity; however, even in simple recurrent network structures this is currently not possible. Thus, here we develop, for a linear differential Hebbian learning system, a method by which we can analytically investigate the dynamics and stability of the connections in recurrent networks. We use noisy periodic external input signals, which through the recurrent connections lead to complex actual ongoing inputs, and observe that large stable ranges emerge in these networks without boundaries or weight normalization. Somewhat counter-intuitively, we find that about 40% of these cases are obtained with a long-term potentiation-dominated STDP curve. Noise can reduce stability in some cases, but generally it does not; instead, stable domains are often enlarged. This study is a first step toward a better understanding of the ongoing interactions between activity and plasticity in recurrent networks using STDP. The results suggest that stability of (sub-)networks should generically be present also in larger structures.
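The type of rule analyzed here can be made concrete with a minimal sketch (toy sinusoidal signals, assuming the common differential Hebbian form dw/dt = mu * x_pre * dx_post/dt rather than the full recurrent setting of the paper):

```python
# Minimal sketch of a linear differential Hebbian rule,
# dw/dt = mu * x_pre * dx_post/dt, a time-continuous STDP-like model.
# Toy signals: x_pre = sin(t), x_post = sin(t - phi) lagging by phi.
import math

def weight_change(phi, mu=0.01, steps=20000):
    """Integrate dw = mu * x_pre * x_post' dt over one full period."""
    dt = 2.0 * math.pi / steps
    dw = 0.0
    for i in range(steps):
        t = i * dt
        x_pre = math.sin(t)
        dx_post = math.cos(t - phi)      # derivative of sin(t - phi)
        dw += mu * x_pre * dx_post * dt
    return dw

# Analytically the integral is mu * pi * sin(phi): potentiation when the
# postsynaptic signal lags the presynaptic one (pre leads post),
# depression when it leads, i.e. a timing-dependent learning window.
ltp = weight_change(+0.5)   # post lags pre -> dw > 0
ltd = weight_change(-0.5)   # post leads pre -> dw < 0
print(ltp, ltd)
```

The sign flip with the phase lag is the essential STDP-like asymmetry; the analytical treatment in the paper tracks how such correlation integrals evolve when the signals themselves are reshaped by the recurrent weights.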

    Multiple chaotic central pattern generators with learning for legged locomotion and malfunction compensation

    An originally chaotic system can be controlled into various periodic dynamics. When it is implemented in a legged robot's locomotion control as a central pattern generator (CPG), sophisticated gait patterns arise, so that the robot can perform various walking behaviors. However, such a single chaotic CPG controller has difficulties dealing with leg malfunction; specifically, in the scenarios presented here, its movement permanently deviates from the desired trajectory. To address this problem, we extend the single chaotic CPG to multiple CPGs with learning. The learning mechanism is based on a simulated annealing algorithm. In a normal situation, the CPGs synchronize and their dynamics are identical. With leg malfunction or disability, the CPGs lose synchronization, leading to independent dynamics. In this case, the learning mechanism is applied to automatically adjust the remaining legs' oscillation frequencies so that the robot adapts its locomotion to deal with the malfunction. As a consequence, the trajectory produced by the multiple chaotic CPGs resembles the original trajectory far better than the one produced by only a single CPG. The performance of the system is evaluated first in a physical simulation of a quadruped as well as a hexapod robot, and finally on a real six-legged walking machine called AMOSII. The experimental results presented here reveal that using multiple CPGs with learning is an effective approach to adaptive locomotion generation where, for instance, different body parts have to perform independent movements for malfunction compensation. Comment: 48 pages, 16 figures, Information Sciences 201
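The simulated-annealing step at the heart of the learning mechanism can be sketched as follows (with a hypothetical one-dimensional cost standing in for the robot's trajectory-deviation score, which cannot be reproduced here):

```python
# Sketch of simulated annealing for frequency adaptation.  The cost
# function is a hypothetical stand-in: in the real system the score is
# the deviation of the robot's trajectory from the desired one.
import math
import random

def cost(freq, target=1.5):
    """Hypothetical trajectory-deviation score, minimal at the (assumed)
    best oscillation frequency for the remaining legs."""
    return (freq - target) ** 2

def anneal(freq=0.5, temp=1.0, cooling=0.99, steps=2000, seed=0):
    rng = random.Random(seed)
    best, best_c = freq, cost(freq)
    c = best_c
    for _ in range(steps):
        cand = freq + rng.gauss(0.0, 0.1)   # propose a nearby frequency
        cand_c = cost(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if cand_c < c or rng.random() < math.exp((c - cand_c) / temp):
            freq, c = cand, cand_c
            if c < best_c:
                best, best_c = freq, c
        temp *= cooling                      # cool down gradually
    return best

print(anneal())   # settles near the hypothetical optimum 1.5
```

The high-temperature phase lets the search escape poor frequency settings; as the temperature decays, the update becomes greedy and fine-tunes the frequency, which matches the exploration-then-refinement behavior needed after a sudden leg malfunction.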

    How feedback inhibition shapes spike-timing-dependent plasticity and its implications for recent Schizophrenia models

    It has been shown that plasticity is not a fixed property but, in fact, changes depending on the location of the synapse on the neuron and/or on changes of biophysical parameters. Here we investigate how plasticity is shaped by feedback inhibition in a cortical microcircuit. We use a differential Hebbian learning rule to model spike-timing-dependent plasticity and show analytically that feedback inhibition shortens the time window for LTD during spike-timing-dependent plasticity but not for LTP. We then use a realistic GENESIS model to test two hypotheses about interneuron hypofunction and conclude that a reduction in GAD67 is the most likely candidate cause of the hypofrontality observed in schizophrenia.

    Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison

    A confusingly wide variety of temporally asymmetric learning rules exists, related to reinforcement learning and/or to spike-timing-dependent plasticity; many of them look exceedingly similar while displaying strongly different behavior. These rules often find their use in control tasks, for example in robotics, for which rigorous convergence and numerical stability are required. The goal of this article is to review these rules and compare them, providing a better overview of their different properties. Two main classes will be discussed: temporal difference (TD) rules and correlation-based (differential Hebbian) rules, plus some transition cases. In general we focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine-learning (non-neuronal) context, a solid mathematical theory for TD-learning has existed for several years; this can partly be transferred to a neuronal framework, too. On the other hand, only now has a more complete theory also emerged for differential Hebbian rules. In general, rules differ in their convergence conditions and their numerical stability, which can lead to very undesirable behavior when applying them. For TD, convergence can be enforced with a certain output condition assuring that the δ-error drops to zero on average (output control). Correlation-based rules, on the other hand, converge when one input drops to zero (input control). Temporally asymmetric learning rules treat situations where incoming stimuli follow each other in time; thus, it is necessary to remember the first stimulus in order to relate it to the second one occurring later. To this end, different types of so-called eligibility traces are used by these two types of rules. This aspect again leads to different properties of TD and differential Hebbian learning, as discussed here.
Thus, this paper, while also presenting several novel mathematical results, is mainly meant to provide a road map through the different neuronally emulated temporally asymmetric learning rules and their behavior, to provide some guidance for possible applications.
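To make the δ-error and the idea of output control concrete, here is a minimal tabular TD(0) sketch (the textbook machine-learning form on a toy chain task, not one of the neuronal implementations compared in the article):

```python
# Tabular TD(0) value prediction on a toy chain 0 -> 1 -> 2 -> 3 (terminal),
# reward 1.0 on entering the terminal state, discount gamma.  The update
# drives the TD (delta) error to zero, which is the convergence condition
# referred to as output control in the text.

gamma = 0.9
alpha = 0.1
V = [0.0, 0.0, 0.0, 0.0]           # state values; state 3 is terminal

for _ in range(2000):               # repeated episodes through the chain
    for s in range(3):
        s_next = s + 1
        r = 1.0 if s_next == 3 else 0.0
        v_next = 0.0 if s_next == 3 else V[s_next]
        delta = r + gamma * v_next - V[s]   # the TD (delta) error
        V[s] += alpha * delta               # converges as delta -> 0

print(V)   # approaches [gamma**2, gamma, 1.0] for states 0..2
```

A differential Hebbian rule would instead correlate an input trace with the derivative of a second input, converging when that input vanishes (input control); the contrast between the two stopping conditions is exactly the distinction drawn above.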

    Mathematical Description of Hebbian Plasticity and its Relation to Reinforcement Learning

    The human brain consists of many millions of nerve cells. Each of these neurons has thousands of connections, the synapses, which are not fixed but change continuously. To describe this synaptic plasticity, various mathematical rules have been formulated, most of which follow Hebb's postulate. In 1949, Donald Hebb proposed that synapses change only when the presynaptic activity, i.e., the activity of the synapse leading to the neuron, and the postsynaptic activity, i.e., the activity of the neuron itself, coincide. A general mathematical description of this influential class of plasticity rules is, however, still missing. Existing descriptions of the dynamics of synaptic connections under Hebbian plasticity are restricted either to a single synapse or to very simple, stationary activity patterns. Nevertheless, Hebbian plasticity finds application in various fields, for example in classical conditioning. Already the extension to operant conditioning, as well as to the closely related reinforcement learning, poses problems, however. Until now, reinforcement learning could not be implemented locally in an artificial neuron, because the plasticity of the connected synapses depends on factors that in turn depend on the activity of non-local neurons. In this dissertation, the plasticity of individual synapses is described and analyzed within a new theoretical framework based on auto- and cross-correlation terms. This allows different rules to be compared and conclusions to be drawn about their stability, making it possible to construct Hebbian plasticity rules tailored to a wide variety of systems. For example, a single additional plasticity-modulating factor suffices to eliminate the autocorrelation.
Furthermore, two existing models are generalized, leading to a new, so-called Variable Output Trace (VOT) plasticity rule, which later finds practical use. In the next step, this analysis is extended so that the plasticity of many synapses can be computed simultaneously. With this complete analytical solution, the dynamics of synaptic connections can be characterized even for non-stationary activities. Among other things, the synaptic development under symmetric differential Hebbian plasticity can thus be predicted. In the last part of this dissertation, a general but simple setup of a small network is presented. With this setup, any Hebbian plasticity rule with a negative autocorrelation can be used to emulate temporal difference learning, a widely used reinforcement learning algorithm. Specifically, differential Hebbian plasticity with a modulating factor, as well as the VOT plasticity developed in the first part, was used to prove the asymptotic equivalence to temporal difference learning, and the practicality of this realization was examined at the same time. The results developed within this dissertation now allow different Hebbian rules and their properties to be compared with one another. Furthermore, it is now possible for the first time to analytically compute the plasticity of many synapses with continuously changing activities simultaneously. This is relevant for all behaving systems (machines, animals) whose interaction with the environment leads to strongly varying neuronal activity.

    Managing the Workload: an Experiment on Individual Decision Making and Performance

    The present research investigates individual decision making in job scheduling by means of a laboratory experiment based on the “Admission Test” of the University of Bologna, in which students have to allocate effort among several tasks in a limited timespan. The experiment includes three treatments that differ in the way the test is administered to participants: with a fixed sequence of questions, with a fixed time per task, or with no constraints. Results show large and significant heterogeneity in treatment effects. Constraints on the answering sequence or on the time allocated to each task improved the performance of subjects who failed to allocate their effort efficiently among the tasks, whereas negative effects were found for students who were already good at self-organizing. The study has relevant policy implications for organizing the workload of a labor force employing different types of workers. Furthermore, important intuitions on the design of university student-selection mechanisms are also discussed.